Gradient Methods for Submodular Maximization

Authors

  • S. Hamed Hassani
  • Mahdi Soltanolkotabi
  • Amin Karbasi
Abstract

In this paper, we study the problem of maximizing continuous submodular functions that naturally arise in many learning applications such as those involving utility functions in active learning and sensing, matrix approximations and network inference. Despite the apparent lack of convexity in such functions, we prove that stochastic projected gradient methods can provide strong approximation guarantees for maximizing continuous submodular functions with convex constraints. More specifically, we prove that for monotone continuous DR-submodular functions, all fixed points of projected gradient ascent provide a factor 1/2 approximation to the global maxima. We also study stochastic gradient and mirror methods and show that after O(1/ε²) iterations these methods reach solutions which achieve in expectation objective values exceeding (OPT/2 − ε). An immediate application of our results is to maximize submodular functions that are defined stochastically, i.e. the submodular function is defined as an expectation over a family of submodular functions with an unknown distribution. We will show how stochastic gradient methods are naturally well-suited for this setting, leading to a factor 1/2 approximation when the function is monotone. In particular, it allows us to approximately maximize discrete, monotone submodular optimization problems via projected gradient descent on a continuous relaxation, directly connecting the discrete and continuous domains. Finally, experiments on real data demonstrate that our projected gradient methods consistently achieve the best utility compared to other continuous baselines while remaining competitive in terms of computational effort.
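
The procedure described above can be illustrated concretely. The sketch below is not the authors' code; it is a minimal, hedged example of stochastic projected gradient ascent on the multilinear extension of a monotone submodular coverage function, over the convex set {x ∈ [0,1]^n : Σ x_i ≤ k}. The coverage instance, budget k, step size, and iteration count are illustrative assumptions.

```python
# Minimal sketch of stochastic projected gradient ascent for monotone
# DR-submodular maximization; the coverage instance, budget, step size,
# and iteration count are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Example monotone submodular function: coverage of a random bipartite graph.
n_sets, n_items = 20, 100
covers = rng.random((n_sets, n_items)) < 0.15  # covers[i, j]: set i covers item j

def f(S):
    """Coverage value of the index set S (monotone submodular)."""
    if not S:
        return 0.0
    return float(covers[list(S)].any(axis=0).sum())

def stochastic_grad(x):
    """Unbiased estimate of grad F(x): sample R ~ x and return the
    marginal gains f(R plus i) - f(R minus i) for every coordinate i."""
    R = set(np.flatnonzero(rng.random(n_sets) < x))
    return np.array([f(R | {i}) - f(R - {i}) for i in range(n_sets)])

def project(x, k):
    """Euclidean projection onto {x in [0,1]^n : sum(x) <= k} via bisection."""
    y = np.clip(x, 0.0, 1.0)
    if y.sum() <= k:
        return y
    lo, hi = 0.0, float(x.max())
    for _ in range(50):                     # bisection on the shift tau
        tau = 0.5 * (lo + hi)
        if np.clip(x - tau, 0.0, 1.0).sum() > k:
            lo = tau
        else:
            hi = tau
    return np.clip(x - hi, 0.0, 1.0)

k, eta, T = 5, 0.05, 500                    # budget, step size, iterations (illustrative)
x = np.zeros(n_sets)
for _ in range(T):
    x = project(x + eta * stochastic_grad(x), k)

# Estimate F(x) by sampling; rounding x (e.g., pipage or swap rounding) would
# recover a discrete set from the fractional solution.
value = np.mean([f(set(np.flatnonzero(rng.random(n_sets) < x))) for _ in range(200)])
print(f"estimated F(x) after {T} iterations: {value:.2f}")
```

Under the assumptions stated in the abstract (monotone continuous DR-submodular objective, convex constraint set), iterates of this kind reach, in expectation, objective values exceeding OPT/2 − ε after O(1/ε²) stochastic gradient steps.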

Related articles

Monotone Closure of Relaxed Constraints in Submodular Optimization: Connections Between Minimization and Maximization: Extended Version

It is becoming increasingly evident that many machine learning problems may be reduced to some form of submodular optimization. Previous work addresses generic discrete approaches and specific relaxations. In this work, we take a generic view from a relaxation perspective. We show a relaxation formulation and simple rounding strategy that, based on the monotone closure of relaxed constraints, r...

Scheduled Diversity

We introduce and study minimax curriculum learning (MCL), a new method for adaptively selecting a sequence of training subsets for a succession of stages in machine learning. The subsets are encouraged to be small and diverse early on, and then larger, harder, and allowably more homogeneous in later stages. At each stage, model weights and training sets are chosen by solving a joint continuous-...

Mirror-Descent-like Algorithms for Submodular Optimization

In this paper we develop a framework of submodular optimization algorithms in line with the mirror-descent style of algorithms for convex optimization. We use the fact that a submodular function has both a subdifferential and a superdifferential, which enables us to formulate algorithms for both submodular minimization and maximization. This reveals a unifying framework for a number of submodul...

Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap

In this paper, we study the problem of constrained and stochastic continuous submodular maximization. Even though the objective function is not concave (nor convex) and is defined in terms of an expectation, we develop a variant of the conditional gradient method, called Stochastic Continuous Greedy, which achieves a tight approximation guarantee. More precisely, for a monotone and continuous D...
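
For contrast with the projected-gradient sketch above, the following is a minimal, self-contained sketch of a stochastic conditional-gradient (continuous-greedy style) update with gradient averaging on the same kind of toy coverage objective and budget constraint. It is not the paper's reference implementation; the averaging schedule, instance, and iteration count are illustrative assumptions.

```python
# Minimal sketch of a stochastic conditional-gradient / continuous-greedy
# update with gradient averaging; the instance, budget, schedule, and
# iteration count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, k, T = 20, 5, 200
covers = rng.random((n, 100)) < 0.15           # toy coverage instance

def f(S):
    """Monotone submodular coverage value of index set S."""
    return float(covers[list(S)].any(axis=0).sum()) if S else 0.0

def stochastic_grad(x):
    """Unbiased gradient estimate of the multilinear extension at x."""
    R = set(np.flatnonzero(rng.random(n) < x))
    return np.array([f(R | {i}) - f(R - {i}) for i in range(n)])

def lmo(d):
    """Linear maximization over {v in [0,1]^n : sum(v) <= k}."""
    v = np.zeros(n)
    top = np.argsort(-d)[:k]
    v[top[d[top] > 0]] = 1.0
    return v

x, d = np.zeros(n), np.zeros(n)
for t in range(1, T + 1):
    rho = 2.0 / (t + 3) ** (2.0 / 3.0)         # decaying averaging weight (illustrative schedule)
    d = (1 - rho) * d + rho * stochastic_grad(x)
    x = x + lmo(d) / T                          # continuous-greedy step of size 1/T

value = np.mean([f(set(np.flatnonzero(rng.random(n) < x))) for _ in range(200)])
print(f"estimated multilinear value: {value:.2f}")
```

The averaged direction d reduces the variance of the stochastic gradients before each linear-maximization step, which is the key design choice behind conditional-gradient variants for the stochastic setting.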

Projection-Free Online Optimization with Stochastic Gradient: From Convexity to Submodularity

Online optimization has been a successful framework for solving large-scale problems under computational constraints and partial information. Current methods for online convex optimization require either a projection or exact gradient computation at each step, both of which can be prohibitively expensive for large-scale applications. At the same time, there is a growing trend of non-convex opti...

Publication date: 2017